Learned olfactory-guided navigation is a powerful platform for studying how a brain generates goal-directed behaviors. However, the quantitative changes in sensorimotor transformations, and the underlying neural circuit substrates, that generate such learning-dependent navigation remain unclear. Here we investigate learned sensorimotor processing for navigation in the nematode Caenorhabditis elegans by measuring and modeling experience-dependent odor and salt chemotaxis. We then explore the neural basis of learned odor navigation through perturbation experiments. We develop a novel statistical model to characterize how the worm employs two behavioral strategies: a biased random walk and weathervaning. We infer weights on these strategies and characterize the sensorimotor kernels that govern them by fitting our model to the worm's time-varying navigation trajectories and precise sensory experiences. After olfactory learning, the fitted odor kernels reflect how appetitively and aversively trained worms up- and down-regulate both strategies, respectively. The model predicts an animal's past olfactory learning experience with > 90% accuracy given finite observations, outperforming a classical chemotaxis metric. The model trained on natural odors further predicts the animals' learning-dependent response to optogenetically induced odor perception. Our measurements and model show that behavioral variability is altered by learning: trained worms exhibit less variable navigation than naive ones. Genetically disrupting individual interneuron classes downstream of an odor-sensing neuron reveals that learned navigation strategies are distributed in the network. Together, we present a flexible navigation algorithm that is supported by distributed neural computation in a compact brain. Free, publicly accessible full text available March 21, 2026.
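The two strategies named above can be illustrated with a toy simulation (a minimal sketch, not the paper's fitted model: the Gaussian plume, turn-rate gain, steering gain, and strategy weights `w_brw` and `w_wv` are all assumed values). A biased random walk suppresses sharp reorientations while concentration increases over time, and weathervaning gradually curves the heading toward the local concentration gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
SOURCE = np.array([5.0, 0.0])  # assumed odor source location

def concentration(pos):
    # Illustrative Gaussian odor plume centered at SOURCE.
    return np.exp(-np.sum((pos - SOURCE) ** 2) / 10.0)

def simulate(n_steps=2000, w_brw=1.0, w_wv=1.0, speed=0.02):
    """Toy trajectory combining two navigation strategies:
    biased random walk (turns suppressed when dC/dt > 0) and
    weathervaning (gradual curving toward the gradient)."""
    pos = np.zeros(2)
    heading = rng.uniform(0.0, 2.0 * np.pi)
    c_prev = concentration(pos)
    traj = [pos.copy()]
    for _ in range(n_steps):
        c = concentration(pos)
        dc = c - c_prev
        c_prev = c
        # Biased random walk: reorientation probability falls while climbing.
        p_turn = min(0.1 * np.exp(-w_brw * 200.0 * dc), 1.0)
        if rng.random() < p_turn:
            heading = rng.uniform(0.0, 2.0 * np.pi)
        # Weathervaning: steer the heading toward the concentration gradient.
        eps = 1e-3
        gx = (concentration(pos + [eps, 0.0]) - concentration(pos - [eps, 0.0])) / (2 * eps)
        gy = (concentration(pos + [0.0, eps]) - concentration(pos - [0.0, eps])) / (2 * eps)
        err = np.angle(np.exp(1j * (np.arctan2(gy, gx) - heading)))
        heading += w_wv * 0.05 * err + rng.normal(0.0, 0.05)
        pos = pos + speed * np.array([np.cos(heading), np.sin(heading)])
        traj.append(pos.copy())
    return np.array(traj)
```

Raising or lowering `w_brw` and `w_wv` together mimics the up- and down-regulation of both strategies after appetitive and aversive training, respectively.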
-
Louis, Matthieu (Ed.)
Imaging neural activity in a behaving animal presents unique challenges, in part because motion from an animal's movement creates artifacts in fluorescence intensity time-series that are difficult to distinguish from neural signals of interest. One approach to mitigating these artifacts is to image two channels simultaneously: one that captures an activity-dependent fluorophore, such as GCaMP, and another that captures an activity-independent fluorophore, such as RFP. Because the activity-independent channel contains the same motion artifacts as the activity-dependent channel, but no neural signals, the two together can be used to identify and remove the artifacts. However, existing approaches for this correction, such as taking the ratio of the two channels, do not account for channel-independent noise in the measured fluorescence. Here, we present Two-channel Motion Artifact Correction (TMAC), a method which seeks to remove artifacts by specifying a generative model of the two-channel fluorescence that incorporates motion artifact, neural activity, and noise. We use Bayesian inference to infer latent neural activity under this model, thus reducing the motion artifact present in the measured fluorescence traces. We further present a novel method for evaluating ground-truth performance of motion correction algorithms by comparing the decodability of behavior from two types of neural recordings: a recording with both an activity-dependent and an activity-independent fluorophore (GCaMP and RFP), and a recording in which both fluorophores were activity-independent (GFP and RFP). A successful motion correction method should decode behavior from the first type of recording, but not the second. We use this metric to systematically compare five models for removing motion artifacts from fluorescent time traces.
Using TMAC-inferred activity, we decode locomotion from a GCaMP-expressing animal 20x more accurately on average than from control, outperforming all other motion correction methods tested; the best of these were ~8x more accurate than control.
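The two-channel setup and the simpler baselines the abstract mentions can be sketched on synthetic data (illustrative only: the multiplicative artifact model, noise levels, and calcium kernel are assumptions, and neither baseline is the TMAC generative model). A shared motion artifact corrupts both channels, activity appears only in the green channel, and the ratio or a regression residual partially recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000

# Synthetic two-channel recording: a shared multiplicative motion
# artifact, calcium-like activity only in the green channel, and
# channel-independent measurement noise.
motion = 1.0 + 0.3 * np.sin(np.linspace(0.0, 20.0, T)) + 0.05 * rng.normal(size=T)
spikes = (rng.random(T) < 0.02).astype(float)
activity = np.convolve(spikes, np.exp(-np.arange(50) / 10.0))[:T]  # exponential calcium kernel
gcamp = motion * (1.0 + activity) + 0.05 * rng.normal(size=T)  # activity-dependent channel
rfp = motion + 0.05 * rng.normal(size=T)                       # activity-independent channel

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

# Baseline 1: ratio of the two channels (cancels shared multiplicative artifact).
ratio = gcamp / rfp

# Baseline 2: regress green on red and keep the residual.
slope, intercept = np.polyfit(rfp, gcamp, 1)
residual = gcamp - (slope * rfp + intercept)
```

Both baselines ignore the channel-independent noise terms, which is precisely the limitation that motivates modeling the noise explicitly as TMAC does.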
-
A central question in neuroscience is how sensory inputs are transformed into percepts. At this point, it is clear that this process is strongly influenced by prior knowledge of the sensory environment. Bayesian ideal observer models provide a useful link between data and theory that can help researchers evaluate how prior knowledge is represented and integrated with incoming sensory information. However, the statistical prior employed by a Bayesian observer cannot be measured directly, and must instead be inferred from behavioral measurements. Here, we review the general problem of inferring priors from psychophysical data, and the simple solution that follows from assuming a prior that is a Gaussian probability distribution. As our understanding of sensory processing advances, however, there is an increasing need for methods to flexibly recover the shape of Bayesian priors that are not well approximated by elementary functions. To address this issue, we describe a novel approach that applies to arbitrary prior shapes, which we parameterize using mixtures of Gaussian distributions. After incorporating a simple approximation, this method produces an analytical solution for psychophysical quantities that can be numerically optimized to recover the shapes of Bayesian priors. This approach offers advantages in flexibility, while still providing an analytical framework for many scenarios. We provide a MATLAB toolbox implementing key computations described herein.
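The simple Gaussian case the abstract reviews can be sketched numerically (a minimal illustration; the stimulus range, noise level, and regression-based recovery are assumed choices, not the toolbox's method). With a Gaussian prior N(mu0, tau^2) and Gaussian likelihood N(s, sigma^2), the posterior-mean estimate is a weighted average of measurement and prior mean, so the prior width tau can be inferred from the slope of estimates against stimuli when sigma is known.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ideal observer: Gaussian prior N(mu0, tau^2), Gaussian likelihood N(s, sigma^2).
mu0, tau, sigma = 0.0, 1.0, 0.5
stimuli = rng.uniform(-3.0, 3.0, size=5000)
measurements = stimuli + sigma * rng.normal(size=stimuli.size)

# Posterior mean is a reliability-weighted average of measurement and prior mean.
w = tau**2 / (tau**2 + sigma**2)
estimates = w * measurements + (1.0 - w) * mu0

# Recovering the prior from behavior: the regression slope of estimates
# on stimuli equals w, from which tau follows if sigma is known.
slope, intercept = np.polyfit(stimuli, estimates, 1)
tau_hat = np.sqrt(slope * sigma**2 / (1.0 - slope))
```

For priors that are not well approximated by a single Gaussian, this closed-form inversion no longer applies, which is the gap the mixture-of-Gaussians parameterization is designed to fill.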